
    Development Of A Cloud Computing Application For Water Resources Modelling And Optimization Based On Open Source Software

    Cloud computing is the latest advancement in Information and Communication Technology (ICT): it provides computing as a service, delivering computation, software, data access and storage without requiring end-user knowledge of the physical location and configuration of the underlying systems. Cloud computing, service-oriented architecture and web geographic information systems are the technologies used to develop the cloud computing application for water resources modelling and optimization presented here. The cloud application is deployed and tested in a distributed computing environment running on three virtual machines (VMs). It exposes five web services: (1) spatial data infrastructure 1 (SDI-1), (2) SDI-2, (3) support for water resources modelling, (4) water resources optimization and (5) user authentication. The application is developed using several programming languages (PHP, Ajax, Java and JavaScript), libraries (OpenLayers and jQuery), open-source software components (GeoServer, PostgreSQL and PostGIS) and OGC standards (WMS, WFS and WFS-T). The web services for water resources modelling support and user authentication are deployed on Amazon Web Services and communicate via WFS with the two SDI web services, which run on two separate VMs providing geospatial data and services. The water resources optimization service is deployed on a separate VM because of its expected large computational requirements. The cloud application is scalable and interoperable, and provides a real-time multi-user collaboration platform. All code and components used are open source. The application was tested with multiple concurrent users; the performance, security and utilization of the distributed computing environment were monitored and analysed together with the users' experience and satisfaction. The applicability of the presented solution and its future development are elaborated.

    Locating Multiple Leaks in Water Distribution Networks Combining Physically Based and Data-Driven Models and High-Performance Computing

    Water utilities are urged to decrease their real water losses, not only to reduce costs but also to assure long-term sustainability. Hardware- and software-based techniques have been broadly used to locate leaks; among the latter, previous works using data-driven models have mostly focused on single leaks. This paper presents a methodology to locate multiple leaks in water distribution networks using pressure residuals. It consists of two phases: the first produces training data for the data-driven model and clusters the nodes based on their leak-flow-rate-independent signatures using an adapted hierarchical agglomerative algorithm; the second locates the leaks using a top-down approach. To identify the leaking clusters and nodes, we employed a custom-built k-nearest neighbor (k-NN) algorithm that compares the test instances with the generated training data. This instance-to-instance comparison requires substantial computational resources for classification, which was overcome by the use of high-performance computing. The methodology was applied to a real network located in a European town, comprising 144 nodes and a total pipe length of 24 km. Although its multiple inlets add redundancy to the network, increasing the challenge of leak location, the method obtained acceptable results to guide the field pinpointing activities. Nearly 70% of the areas determined by the clusters were identified with an accuracy of over 90% for leak flows above 3.0 L/s, and the leaking nodes were accurately detected over 50% of the time for leak flows above 4.0 L/s.
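The instance-to-instance comparison at the heart of the method can be sketched as a plain k-NN majority vote over residual signatures. This is a minimal illustration with a made-up two-sensor training set and hypothetical cluster labels, not the authors' custom-built implementation:

```python
from collections import Counter
from math import sqrt

def euclid(a, b):
    # Euclidean distance between two pressure-residual vectors
    return sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn_classify(train, test_instance, k=3):
    """Classify a pressure-residual signature by majority vote among the
    k nearest training instances. Each training item is a pair
    (residual_vector, cluster_label)."""
    nearest = sorted(train, key=lambda item: euclid(item[0], test_instance))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Hypothetical training set: residual vectors at two pressure sensors,
# labelled with the cluster assumed to contain the leak.
train = [
    ((0.9, 0.1), "cluster_A"),
    ((0.8, 0.2), "cluster_A"),
    ((0.1, 0.9), "cluster_B"),
    ((0.2, 0.8), "cluster_B"),
]
print(knn_classify(train, (0.85, 0.15)))  # cluster_A
```

In the paper this comparison is run for every test instance against a large generated training set, which is what makes high-performance computing necessary.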

    Committees Of Specialized Conceptual Hydrological Models: Comparative Study

    The committee modelling approach is a skilful prediction technique in hydrological modelling in which several individual models are optimally combined to produce the predictive model output. A single hydrological model, or a model calibrated with a single aggregated objective function, can hardly capture all facets of a complex process and deliver the best possible outputs: such a model may perform well for high flows or for low flows, but rarely for both, so more flexible modelling architectures are required. One possibility is to build several specialized models, each responsible for a particular sub-process (high flows or low flows), and to combine them using dynamic weights, thus forming a committee model. In this study we compare two types of committee models: (i) a combined model based on fuzzy membership functions (Kayastha et al. 2013, Fenicia et al. 2007) and (ii) a combined model based on weights calculated from hydrological states (Oudin et al. 2006). Before being combined, the individual hydrological models are calibrated for high and low flows with (different) suitable objective functions using the Adaptive Cluster Covering Algorithm (Solomatine 1999). The committee model based on fuzzy memberships does not generate additional water in the system (it preserves the water balance), whereas there is no such guarantee for committees based on hydrological states. The relative performance and characteristics of the two committee models are illustrated with an application to HBV hydrological models in the Bagmati catchment in Nepal.
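A minimal sketch of the fuzzy-weighted combination of two specialized models. The flow thresholds and the piecewise-linear membership function are illustrative assumptions, not taken from the papers cited. Because the weights sum to one, the combined discharge always lies between the two model outputs, which is why this form of committee preserves the water balance:

```python
def fuzzy_weight(q, low=5.0, high=20.0):
    """Membership of the current flow q in the 'high flow' regime:
    0 below `low`, 1 above `high`, linear in between.
    Thresholds are illustrative, not from the study."""
    if q <= low:
        return 0.0
    if q >= high:
        return 1.0
    return (q - low) / (high - low)

def committee_output(q_high_model, q_low_model, q_current):
    """Dynamically weighted combination of the two specialized models.
    Weights sum to 1, so no water is created or destroyed."""
    w = fuzzy_weight(q_current)
    return w * q_high_model + (1.0 - w) * q_low_model

print(committee_output(30.0, 28.0, 25.0))  # high-flow model dominates: 30.0
print(committee_output(30.0, 28.0, 3.0))   # low-flow model dominates: 28.0
```

A state-based committee would instead compute `w` from an internal model state (e.g. soil moisture), which is where the water-balance guarantee can be lost.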

    Meteorological Drought Forecasting Based on Climate Signals Using Artificial Neural Network – A Case Study in Khanhhoa Province Vietnam

    In Khanhhoa Province (Vietnam) long-lasting droughts often occur, causing negative consequences for the region, so accurate drought forecasting is of paramount importance. Drought index forecasting models normally use previously lagged observations of the index itself and of rainfall as input variables; recently, climate signals have also been used as potential predictors. In this study we use the 3-month, 6-month and 12-month Standardized Precipitation Evapotranspiration Index (SPEI), calculated over the period from 1977 to 2014. This paper examines lagged climate signals for predicting SPEI in Khanhhoa Province using an artificial neural network. Climate signal indices from the Indian Ocean and the Pacific Ocean surrounding the study area were analysed to select five predictors for the model. These were combined with local variables (lagged SPEI and rainfall) and used as input variables in 16 different models for different forecast horizons. The results show that adding climate signals achieves better prediction. Climate signals can also be used as sole predictors, without local variables; in this case they explain 61 to 80% of the SPEI variation at longer horizons (e.g. 12-month). The developed model can benefit the development of long-term policies for reservoir and irrigation regulation and plant alternation schemes in the context of drought hazard.
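The lagged-input construction that such forecasting models rely on can be sketched as follows; the lag set and the toy SPEI values are hypothetical, and a real model would add the selected climate indices as extra columns:

```python
def lagged_features(series, lags):
    """Build input rows of lagged values for a forecasting model.
    Returns (X, y) where each row of X holds series[t - lag] for the
    given lags and y[t] is the value to predict."""
    start = max(lags)
    X, y = [], []
    for t in range(start, len(series)):
        X.append([series[t - lag] for lag in lags])
        y.append(series[t])
    return X, y

spei = [0.3, -0.1, -0.8, -1.2, -0.5, 0.2, 0.6]  # toy SPEI series
X, y = lagged_features(spei, lags=[1, 2, 3])
print(X[0], y[0])  # [-0.8, -0.1, 0.3] -1.2
```

Each of the 16 models in the study corresponds to a different choice of such input sets and forecast horizons.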

    Identifying major drivers of daily streamflow from large-scale atmospheric circulation with machine learning

    Previous studies linking large-scale atmospheric circulation and river flow with traditional machine learning techniques have predominantly explored monthly, seasonal or annual streamflow modelling for applications in direct downscaling or hydrological climate-impact studies. This paper identifies major drivers of daily streamflow from large-scale atmospheric circulation using two reanalysis datasets for six catchments in Norway representing various Köppen-Geiger climate types and flood-generating processes. A nested loop of roughly pruned random forests is used for feature extraction, demonstrating the potential for automated retrieval of physically consistent and interpretable input variables. Random forest (RF), support vector machine (SVM) for regression and multilayer perceptron (MLP) neural networks are compared to multiple-linear regression to assess the role of model complexity in utilizing the identified major drivers to reconstruct streamflow. The machine learning models were trained on 31 years of aggregated atmospheric data with distinct moving windows for each catchment, reflecting catchment-specific forcing-response relationships between the atmosphere and the rivers. The results show that accuracy improves to some extent with model complexity. In all but the smallest, rainfall-driven catchment, the most complex model, MLP, gives a Nash-Sutcliffe Efficiency (NSE) ranging from 0.71 to 0.81 on testing data spanning five years. The poorer performance by all models in the smallest catchment is discussed in relation to catchment characteristics, sub-grid topography and local variability. The intra-model differences are also viewed in relation to the consistency between the automatically retrieved feature selections from the two reanalysis datasets. This study provides a benchmark for future development of deep learning models for direct downscaling from large-scale atmospheric variables to daily streamflow in Norway.
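The catchment-specific aggregation can be illustrated with a simple trailing moving-window mean over a daily atmospheric series; the window width stands in for the catchment response time, and the paper's exact aggregation scheme may differ:

```python
def window_mean(series, width):
    """Aggregate a daily series with a trailing moving window of
    `width` days. Entries before the first full window are None."""
    out = [None] * (width - 1)
    for i in range(width - 1, len(series)):
        out.append(sum(series[i - width + 1 : i + 1]) / width)
    return out

precip = [0, 10, 20, 30, 40]  # toy daily values
print(window_mean(precip, 3))  # [None, None, 10.0, 20.0, 30.0]
```

In the study, a distinct window is chosen per catchment so that the aggregated forcing reflects that catchment's lagged response to the atmosphere.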

    Multiobjective direct policy search using physically based operating rules in multireservoir systems

    This study explores ways to introduce physical interpretability into the process of optimizing operating rules for multireservoir systems with multiple objectives. Prior studies applied the concept of direct policy search (DPS), in which the release policy is expressed as a set of parameterized functions (e.g., neural networks) that are optimized by simulating the performance of different parameter value combinations over a testing period. The problem with this approach is that operators generally avoid adopting such artificial black-box functions for the direct real-time control of their systems, preferring simpler tools with a clear connection to the system physics. This study addresses this mismatch by replacing the black-box functions in DPS with physically based parameterized operating rules, for example by directly using target levels in dams as decision variables. This leads to results that are physically interpretable and may be more acceptable to operators. The methodology proposed in this work is applied to a network of five reservoirs and four power plants in the Nechi catchment in Colombia, with four interests involved: average energy generation, firm energy generation, flood hazard, and flow regime alteration. The release policy is expressed using only 12 parameters, which significantly reduces the computational complexity compared to existing approaches of multiobjective DPS. The resulting four-dimensional Pareto-approximate set offers a variety of operational strategies from which operators may choose the one that corresponds best to their preferences. For demonstration purposes, one particular optimized policy is selected and its parameter values are analyzed to illustrate how the physically based operating rules can be directly interpreted by the operators.
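A toy example of a physically based operating rule of the kind described, using a target storage as the directly interpretable decision variable. The mass-balance simulation and all numbers are illustrative and are not the Nechi system; in DPS, the target and capacity would be among the 12 optimized parameters:

```python
def rule_release(storage, inflow, target, r_max):
    """Physically based operating rule: release the volume in excess of
    the target storage, capped by the outlet capacity r_max."""
    surplus = storage + inflow - target
    return max(0.0, min(r_max, surplus))

def simulate(storage0, inflows, target, r_max):
    """Simple mass-balance simulation of one reservoir under the rule."""
    storage, releases = storage0, []
    for q_in in inflows:
        r = rule_release(storage, q_in, target, r_max)
        storage = storage + q_in - r
        releases.append(r)
    return releases, storage

releases, final = simulate(50.0, [10.0, 30.0, 5.0], target=55.0, r_max=20.0)
print(releases, final)  # [5.0, 20.0, 15.0] 55.0
```

An operator can read `target` and `r_max` directly as a target level and an outlet capacity, which is exactly the interpretability a neural-network policy lacks.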

    Effect Of Different Hydrological Model Structures On The Assimilation Of Distributed Uncertain Observations

    Reliable flood forecasting is a crucial problem for assessing flood risk and the consequent damages. Different hydrological models (distributed, semi-distributed or lumped) have been proposed in order to deal with this issue. The choice of the proper model structure has been investigated by many authors and is one of the main sources of uncertainty for a correct evaluation of the outflow hydrograph. In addition, the recent increase in data availability makes it possible to update hydrological models in response to real-time observations. For these reasons, the aim of this work is to evaluate the effect of different structures of a semi-distributed hydrological model on the assimilation of distributed uncertain discharge observations. The study was applied to the Bacchiglione catchment, located in Italy. The first methodological step was to divide the basin into different sub-basins according to topographic characteristics. Secondly, two different structures of the semi-distributed hydrological model were implemented in order to estimate the outflow hydrograph. Then, synthetic uncertain discharge observations were generated as a function of the observed and simulated flow at the basin outlet and assimilated into the semi-distributed models using a Kalman filter. Finally, different spatial patterns of sensor locations were assumed for updating the model state in response to the uncertain discharge observations. The results of this work point out that, overall, the assimilation of uncertain observations can improve hydrological model performance. In particular, it was found that the model structure is an important factor, difficult to characterize, since it can induce different forecasts of outflow discharge. This study is partly supported by the FP7 EU Project WeSenseIt.
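The Kalman filter update used to assimilate an uncertain discharge observation can be sketched in its scalar form; this is a simplification of the semi-distributed, multi-state case, and the variances are illustrative:

```python
def kalman_update(x_prior, p_prior, z, r):
    """Scalar Kalman filter update of a model state (e.g. simulated
    discharge) with an uncertain observation z of error variance r."""
    k = p_prior / (p_prior + r)           # Kalman gain
    x_post = x_prior + k * (z - x_prior)  # corrected state
    p_post = (1.0 - k) * p_prior          # reduced state variance
    return x_post, p_post

# Prior model discharge 100 with variance 4; observation 110 with variance 1.
x, p = kalman_update(100.0, 4.0, 110.0, 1.0)
print(x, round(p, 3))  # 108.0 0.8
```

The gain shows directly how the assumed observational error `r` controls how far the model state is pulled toward the sensor value.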

    Prediction Of Hydrological Models’ Uncertainty By A Committee Of Machine Learning-Models

    This study presents an approach to combining the uncertainties of hydrological model outputs predicted by a number of machine learning models. Machine-learning-based uncertainty prediction is very useful for estimating a hydrological model's uncertainty in a particular hydro-meteorological situation in real-time applications [1]. In this approach, hydrological model realizations from Monte Carlo simulations are used to build different machine learning uncertainty models that predict the uncertainty (quantiles of the pdf) of a deterministic hydrological model output. The uncertainty models are trained using antecedent precipitation and streamflows as inputs, and the trained models are then employed to predict the output uncertainty specific to new input data. We used three machine learning models, namely artificial neural networks, model trees and locally weighted regression, to predict output uncertainties. These three models produce similar verification results, which can be improved by merging their outputs dynamically, so we propose an approach that forms a committee of the three models to combine their outputs. The approach is applied to estimate the uncertainty of streamflow simulations from a conceptual hydrological model in the Brue catchment in the UK and the Bagmati catchment in Nepal. The verification results show that the merged output is better than any individual model output. [1] D. L. Shrestha, N. Kayastha, D. P. Solomatine, and R. Price. Encapsulation of parametric uncertainty statistics by various predictive machine learning models: MLUE method, Journal of Hydroinformatics, in press, 2013.

    Assimilation Of Heterogeneous Uncertain Data, Having Different Observational Errors, In Hydrological Models

    Accurate real-time forecasting of river water levels is an important issue that has to be addressed in order to prevent and mitigate water-related risks. To this end, data assimilation methods have been used to improve the forecasting ability of water models by merging observations coming from stations with model simulations. With the increasing availability of dynamic, cheap sensors of variable life span and spatial and temporal coverage, citizens are becoming an active part of information capturing, evaluation and communication. On the other hand, it is difficult to assess the uncertainty related to the observations coming from such sensors. The main objective of this work is to evaluate the influence of the observational error on the proposed assimilation methodologies used to update the hydrological model in response to dynamic observations of water discharge. We tested the developed approaches on a test study area, the Brue catchment, located in the South West of England, UK. Two different filtering approaches, the Ensemble Kalman filter and the Particle filter, were applied to the semi-distributed hydrological model. Discharge observations were synthetically generated as a function of the observed and simulated flow at the basin outlet. Different types of observational error were introduced assuming diverse probability distributions and first- and second-order moments. The results of this work show how the assimilation of observations dynamic in time and space can improve hydrological model performance, with a better forecast of flood events. It was found that the choice of the appropriate observational error, which is difficult to characterize, and of the filtering approach affects the model accuracy. This study is partly supported by the FP7 EU Project WeSenseIt.
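One analysis step of the particle filter variant can be sketched as likelihood weighting followed by resampling; the Gaussian observation error and the tiny ensemble are illustrative assumptions, and swapping the likelihood is how different observational error models would be compared:

```python
import math
import random

def pf_update(particles, z, sigma):
    """One particle filter analysis step: weight each particle (a
    candidate discharge state) by the Gaussian likelihood of the
    observation z with error sigma, then resample with replacement."""
    weights = [math.exp(-0.5 * ((p - z) / sigma) ** 2) for p in particles]
    total = sum(weights)
    weights = [w / total for w in weights]
    return random.choices(particles, weights=weights, k=len(particles))

random.seed(0)
particles = [90.0, 100.0, 110.0, 130.0]          # prior ensemble of states
posterior = pf_update(particles, z=105.0, sigma=5.0)
print(posterior)  # resampled ensemble concentrates near the observation
```

Unlike the (Ensemble) Kalman filter, this step makes no Gaussian assumption about the state distribution, only about the observation error as modelled here.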

    Precipitation Sensor Network Optimal Design Using Time-Space Varying Correlation Structure

    The design of optimal precipitation sensor networks is a common topic in the hydrological literature; however, it remains an open problem owing to a lack of understanding of some spatially variable processes and to assumptions that often cannot be verified. Among these assumptions lies the homoscedasticity of precipitation fields, common in hydrological practice. To overcome this, a local intensity-variant covariance structure is proposed which, more broadly, provides a fully updated correlation structure as new data come into the system. This intensity-variant correlation structure will be tested in the design of a precipitation sensor network for a case study, improving the estimation of precipitation fields and thus reducing the input uncertainty in hydrological models, especially within the scope of rainfall-runoff models.
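A fully updated correlation structure implies an online update as each new observation pair arrives. A minimal sketch for a single gauge pair, using a standard incremental co-moment update rather than the paper's exact intensity-variant formulation:

```python
class OnlineCorrelation:
    """Incrementally updated correlation between two rain gauges, kept
    current as new data arrive (illustrative single-pair sketch)."""

    def __init__(self):
        self.n = 0
        self.mx = self.my = 0.0
        self.sxx = self.syy = self.sxy = 0.0

    def update(self, x, y):
        # Welford-style update of means and (co-)moments
        self.n += 1
        dx = x - self.mx
        self.mx += dx / self.n
        dy = y - self.my
        self.my += dy / self.n
        self.sxx += dx * (x - self.mx)
        self.syy += dy * (y - self.my)
        self.sxy += dx * (y - self.my)

    def corr(self):
        if self.sxx == 0.0 or self.syy == 0.0:
            return float("nan")
        return self.sxy / (self.sxx * self.syy) ** 0.5

oc = OnlineCorrelation()
for x, y in [(1.0, 2.0), (2.0, 4.1), (3.0, 5.9), (4.0, 8.2)]:  # toy gauge pairs
    oc.update(x, y)
print(round(oc.corr(), 3))  # 0.999
```

An intensity-variant scheme would additionally condition these moments on the local rainfall intensity, so the correlation used in the network design changes with the precipitation regime.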